State-of-the-art 3D semantic segmentation models are typically trained on off-the-shelf public benchmarks, but they often face major challenges when deployed to a new domain. In this paper, we propose an Active-and-Adaptive Segmentation (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model and to bridge the point-distribution gap between domains. Specifically, before the cross-domain adaptation stage begins, ADAS performs an active sampling operation to select a maximally informative subset from both the source and target domains for effective adaptation, reducing the adaptation difficulty in 3D scenarios. Benefiting from the rise of multi-modal 2D-3D datasets, ADAS utilizes a cross-modal attention-based feature fusion module that extracts representative pairs of image and point features to achieve bi-directional image-point feature interaction for safer adaptation. Experimentally, ADAS is verified to be effective in many cross-domain settings, including: 1) Unsupervised Domain Adaptation (UDA), where all samples from the target domain are unlabeled; 2) Unsupervised Few-shot Domain Adaptation (UFDA), where only a few unlabeled samples are available from the target domain; and 3) Active Domain Adaptation (ADA), where the target samples selected by ADAS are manually annotated. The results demonstrate that ADAS achieves significant accuracy gains when easily coupled with self-training methods or off-the-shelf UDA works.
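For intuition, the active sampling step can be pictured as scoring candidate scans by model uncertainty and keeping a small budget of the most informative ones. Below is a minimal sketch of such entropy-based selection; the function names, the scoring rule, and the per-scan batching are our illustrative assumptions, not the paper's actual selection criterion.

```python
import numpy as np

def entropy_score(probs: np.ndarray) -> float:
    """Mean per-point predictive entropy of one scan.

    probs: (num_points, num_classes) softmax output of the
    well-trained 3D segmentation model on that scan.
    """
    eps = 1e-12
    point_entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(point_entropy.mean())

def select_informative_subset(scan_probs: list, budget: int) -> list:
    """Indices of the `budget` scans with the highest mean predictive
    entropy -- a simple stand-in for the maximally informative subset
    chosen before the adaptation stage."""
    scores = np.array([entropy_score(p) for p in scan_probs])
    return np.argsort(scores)[-budget:].tolist()
```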
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving the linear inverse problem with sparse representation that arises in signal and image processing. A numerical experiment on image deblurring shows that the convergence behavior on a logarithmic-scale ordinate tends to be linear rather than logarithmic, i.e., approximately flat. Through meticulous observation, we find that the previous assumption that the smooth part is merely convex weakens the least-squares model. Specifically, it is more reasonable to assume the smooth part to be strongly convex for the least-squares model, even though the image matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization with the smooth part strongly convex instead of generally convex, a result first found in [Li et al., 2022]. Based on this pivotal inequality, we generalize linear convergence to composite optimization in both the objective value and the squared proximal subgradient norm. Meanwhile, we use a simple ill-conditioned matrix, whose singular values are easy to compute, instead of the original blur matrix. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for strongly convex functions has a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also generalize the faster linear convergence rate to composite optimization, in both the objective value and the squared proximal subgradient norm, by taking advantage of a well-constructed Lyapunov function with a slight modification and the phase-space representation based on the high-resolution differential equation framework from the implicit-velocity scheme.
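For reference, the composite model and the ISTA update that the abstract discusses take the following standard form (our notation):

```latex
% Composite objective: smooth least-squares part + nonsmooth sparsity part
\min_{x\in\mathbb{R}^n}\ \Phi(x)
  = \underbrace{\tfrac12\|Ax-b\|_2^2}_{f(x)\ \text{(smooth)}}
  + \underbrace{\lambda\|x\|_1}_{g(x)\ \text{(nonsmooth)}},
\qquad L=\|A^\top A\|_2 .
% ISTA iterate with step size s \le 1/L:
x_{k+1} = \operatorname{prox}_{sg}\bigl(x_k - s\nabla f(x_k)\bigr)
        = \mathcal{T}_{\lambda s}\bigl(x_k - sA^\top(Ax_k-b)\bigr),
% where \mathcal{T}_\alpha is the soft-thresholding operator:
\bigl(\mathcal{T}_{\alpha}(z)\bigr)_i
  = \operatorname{sign}(z_i)\,\max\{|z_i|-\alpha,\,0\}.
```

Note that if $A$ has full column rank, the smooth part $f$ is $\mu$-strongly convex with $\mu=\sigma_{\min}(A)^2>0$, which can be tiny for an ill-conditioned blur matrix but is still strictly positive; this is precisely the stronger assumption the abstract argues for.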
Nesterov's accelerated gradient descent (NAG) is one of the milestones in the history of first-order algorithms. That the mechanism behind the acceleration phenomenon is the gradient correction term was not uncovered until the high-resolution differential equation framework was proposed in [Shi et al., 2022]. To deepen our understanding of how the high-resolution differential equation framework bears on the convergence rate, we continue to investigate NAG for $\mu$-strongly convex functions in this paper, using the techniques of Lyapunov analysis and phase-space representation. First, we revisit the proof from the gradient-correction scheme. Similar to [Chen et al., 2022], a straightforward calculation simplifies the proof considerably and enlarges the step size to $s=1/L$ with minor modification; meanwhile, the way of constructing Lyapunov functions is principled. Furthermore, we also investigate NAG from the implicit-velocity scheme. Owing to the difference in the velocity iterates, we find that the Lyapunov function can be constructed from the implicit-velocity scheme without the additional term, and the calculation of the iterative difference becomes simpler. Together with the optimal step size obtained, the high-resolution differential equation framework from the implicit-velocity scheme of NAG is perfect and outperforms the gradient-correction scheme.
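For concreteness, the scheme under analysis is NAG for a $\mu$-strongly convex, $L$-smooth function $f$, which in its standard form reads:

```latex
% NAG for mu-strongly convex f, step size s (e.g., the enlarged s = 1/L):
y_{k+1} = x_k - s\,\nabla f(x_k),
\qquad
x_{k+1} = y_{k+1} + \frac{1-\sqrt{\mu s}}{1+\sqrt{\mu s}}\,\bigl(y_{k+1}-y_k\bigr).
```

Roughly, eliminating $y_k$ from these iterates makes a gradient-correction term proportional to $s\bigl(\nabla f(x_k)-\nabla f(x_{k-1})\bigr)$ explicit, whereas keeping the velocity iterates $v_k=(x_k-x_{k-1})/\sqrt{s}$ leads to the implicit-velocity phase-space representation that the abstract finds simpler to analyze.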
Pre-trained language models are trained on large-scale unsupervised data and can be fine-tuned on small-scale labeled datasets with good results. Multilingual pre-trained language models can be trained on multiple languages and understand them simultaneously. At present, research on pre-trained models mainly focuses on rich-resource languages, while there is relatively little research on low-resource languages such as minority languages, and public multilingual pre-trained language models do not work well for them. This paper therefore constructs a multilingual pre-trained language model named MiLMo that performs better on minority-language tasks, covering Mongolian, Tibetan, Uyghur, Kazakh, and Korean. To address the scarcity of minority-language datasets and verify the effectiveness of the MiLMo model, this paper also constructs a minority multilingual text classification dataset named MiTC and trains a word2vec model for each language. By comparing the word2vec models and the pre-trained model on the text classification task, this paper provides an optimal scheme for downstream-task research on minority languages. The final experimental results show that the pre-trained model outperforms the word2vec models and achieves the best results in minority multilingual text classification. The multilingual pre-trained language model MiLMo, the multilingual word2vec models, and the multilingual text classification dataset MiTC are published at https://milmo.cmli-nlp.com.
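As a rough picture of the word2vec baseline used in such a comparison, one can train per-language embeddings and classify documents by their averaged vectors. This sketch uses gensim and scikit-learn on toy stand-in data; it is not the authors' pipeline.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy stand-in data; in practice these are tokenized minority-language texts.
tokenized_corpus = [["good", "movie"], ["bad", "movie"],
                    ["good", "film"], ["bad", "film"]] * 10
train_texts = tokenized_corpus
train_labels = [1, 0, 1, 0] * 10

# One word2vec model per language (here: one toy "language").
w2v = Word2Vec(sentences=tokenized_corpus, vector_size=100, window=5,
               min_count=2, workers=4, epochs=10)

def doc_vector(tokens):
    """Average the in-vocabulary word vectors of one document."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([doc_vector(t) for t in train_texts])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
```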
Nesterov's accelerated gradient descent (NAG) is one of the milestones in the history of first-order algorithms. For a long time, however, the cause of the acceleration remained a mystery: the existence of the gradient correction was not revealed until the high-resolution differential equation framework was proposed in [Shi et al., 2021]. In this paper, we continue to investigate the acceleration phenomenon. First, we provide a significantly simplified proof based on precise observations and an inequality for $L$-smooth functions. Then, a new implicit high-resolution differential equation framework, together with the corresponding implicit-velocity phase-space representation and Lyapunov function, is proposed to investigate the convergence behavior of the iterative sequence $\{x_k\}_{k=0}^{\infty}$ of NAG. Furthermore, from the two types of phase-space representations, we find that the role played by the gradient correction is equivalent to that played by the gradient implicitly contained in the velocity, where the only difference is that the iterative sequence $\{y_k\}_{k=0}^{\infty}$ is replaced by $\{x_k\}_{k=0}^{\infty}$. Finally, for the open question of whether gradient-norm minimization of NAG enjoys the faster rate $o(1/k^3)$, we provide a positive answer with a proof. Meanwhile, a faster rate $o(1/k^2)$ for objective-value minimization is shown for $r>2$.
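In standard notation (ours, not necessarily the paper's), the two equivalent ways of writing convex NAG that underlie the two phase-space representations are:

```latex
% (a) gradient-correction form, iterating on y_k:
y_{k+1} = x_k - s\,\nabla f(x_k),
\qquad
x_{k+1} = y_{k+1} + \frac{k}{k+3}\,\bigl(y_{k+1}-y_k\bigr);
% (b) implicit-velocity form, with the gradient taken at the
%     extrapolated point:
x_{k+1} = y_k - s\,\nabla f(y_k),
\qquad
y_{k+1} = x_{k+1} + \frac{k}{k+3}\,\bigl(x_{k+1}-x_k\bigr).
```

In form (b) the gradient is evaluated at the extrapolated point $y_k$, which already carries the velocity $v_k=(x_k-x_{k-1})/\sqrt{s}$; this is the sense in which the gradient correction of form (a) is "implicitly contained in the velocity".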
In this paper, we propose SATformer, a novel Transformer-based solution for Boolean satisfiability (SAT) solving. Unlike existing learning-based SAT solvers that learn at the problem-instance level, SATformer learns the minimum unsatisfiable cores (MUC) of unsatisfiable problem instances, which provide rich information about the causality of such problems' satisfiability. Specifically, we apply a graph neural network (GNN) to obtain embeddings of the clauses in conjunctive normal form (CNF). A hierarchical Transformer architecture is applied to the clause embeddings to capture the relationships among clauses, with the self-attention weights learned to be high when the clauses forming an UNSAT core attend to one another, and low otherwise. In this way, SATformer effectively learns the correlations among clauses for SAT prediction. Experimental results show that SATformer is more powerful than existing end-to-end learning-based SAT solvers.
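A toy version of the clause-level pipeline (clause embeddings followed by self-attention over clauses and a SAT/UNSAT head) might look as follows; the dimensions, the mean-pooling stand-in for the GNN, and the single-level (rather than hierarchical) encoder are our simplifications for illustration.

```python
import torch
import torch.nn as nn

class ToySATformer(nn.Module):
    """Illustrative clause-relation model: embed clauses, let
    self-attention capture inter-clause correlations (e.g., clauses
    that jointly form an UNSAT core), then predict SAT/UNSAT."""

    def __init__(self, num_vars: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # 2 * num_vars literal embeddings (positive and negated).
        self.literal_emb = nn.Embedding(2 * num_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, clauses) -> torch.Tensor:
        # Stand-in for the GNN stage: mean-pool literal embeddings per clause.
        clause_vecs = torch.stack([
            self.literal_emb(torch.tensor(lits)).mean(dim=0) for lits in clauses
        ])                                            # (num_clauses, d_model)
        h = self.encoder(clause_vecs.unsqueeze(0))    # (1, num_clauses, d_model)
        return torch.sigmoid(self.head(h.mean(dim=1)))  # SAT probability

# Usage on a tiny CNF over 3 variables; literal i -> index 2i (pos) / 2i+1 (neg).
model = ToySATformer(num_vars=3)
prob_sat = model([[0, 3], [2, 5], [4, 1]])
```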
Many data analysis tasks rely heavily on a deep understanding of tables (multi-dimensional data). Across these tasks, there exist commonly used metadata attributes of table fields/columns. In this paper, we identify four such analysis metadata: the measure/dimension dichotomy, common field roles, semantic field types, and default aggregation functions. While inferring these metadata faces the challenge of insufficient supervision signals, it can leverage existing knowledge and an understanding of field distributions. To infer these metadata for raw tables, we propose a multi-task metadata model that fuses field-distribution features and knowledge-graph information into pre-trained tabular models. For model training and evaluation, we collect a large corpus of analysis metadata (~582K tables from private spreadsheets and public tabular datasets) by using diverse smart supervision from downstream tasks. Our best model achieves an accuracy of 98%, a top-1 hit rate above 67%, a precision above 80%, and an accuracy of 88% on the four analysis metadata inference tasks, respectively. It outperforms a series of baselines based on rules, traditional machine learning methods, and pre-trained tabular models. The analysis metadata model has been deployed in a popular data analysis product, supporting downstream intelligent features such as insight mining, chart / pivot table recommendation, and natural language QA...
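To give a feel for one of the four metadata, the measure/dimension dichotomy, here is a deliberately naive rule-based baseline of the kind the model is reported to outperform; the threshold is invented for illustration.

```python
import pandas as pd

def guess_measure_or_dimension(col: pd.Series) -> str:
    """Naive heuristic: numeric columns with many distinct values are
    likely measures (aggregatable quantities); everything else is
    likely a dimension (grouping key). The paper's model instead uses
    field distributions plus knowledge-graph signals."""
    if pd.api.types.is_numeric_dtype(col):
        distinct_ratio = col.nunique(dropna=True) / max(len(col), 1)
        return "measure" if distinct_ratio > 0.2 else "dimension"
    return "dimension"

df = pd.DataFrame({"region": ["east", "west", "east"],
                   "sales": [105.0, 98.5, 130.2]})
print({name: guess_measure_or_dimension(df[name]) for name in df})
# {'region': 'dimension', 'sales': 'measure'}
```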
Recently, deep reinforcement learning (RL) has shown some impressive successes in robotic manipulation applications. However, training robots in the real world is non-trivial owing to sample-efficiency and safety concerns. Sim-to-real transfer has been proposed to address these problems, but it introduces a new problem called the "reality gap". In this work, we introduce a sim-to-real learning framework for vision-based assembly tasks that uses the input of a single camera, with training carried out in a simulated environment. We propose a domain adaptation method based on cycle-consistent generative adversarial networks (CycleGAN), together with a force-control transfer approach, to bridge the reality gap. We demonstrate that the proposed framework, trained in a simulated environment, can be successfully transferred to a real peg-in-hole setup.
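The heart of the CycleGAN-based domain adaptation is the cycle-consistency objective, which forces translated images to map back to their originals so that task-relevant content is preserved while style (simulated vs. real camera frames) changes. A minimal sketch of that loss term (generator architectures omitted; the names and the weight `lam` are our assumptions) is:

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_sim2real: nn.Module, G_real2sim: nn.Module,
                           sim_img: torch.Tensor, real_img: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    """Translated images must map back to their originals in both
    directions; lam weights this term against the adversarial losses."""
    recon_sim = G_real2sim(G_sim2real(sim_img))    # sim -> real -> sim
    recon_real = G_sim2real(G_real2sim(real_img))  # real -> sim -> real
    return lam * (l1(recon_sim, sim_img) + l1(recon_real, real_img))
```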
In this paper, we consider coefficient-based regularized distribution regression, which aims to regress from probability measures to real-valued responses over a reproducing kernel Hilbert space (RKHS), where the regularization is put on the coefficients and the kernel is assumed to be indefinite. The algorithm involves two stages of sampling: the first-stage sample consists of distributions, and the second-stage sample is obtained from these distributions. The asymptotic behavior of the algorithm under different regularity ranges of the regression function is comprehensively studied, and learning rates are derived via integral-operator techniques. We obtain optimal rates under some mild conditions, which match the minimax optimal rate of one-stage sampling. Compared with the kernel methods for distribution regression in the literature, the algorithm under consideration does not require the kernel to be symmetric and positive semi-definite, and hence provides a simple paradigm for designing indefinite kernel methods, enriching the theme of distribution regression. To the best of our knowledge, this is the first result for distribution regression with indefinite kernels, and our algorithm can improve the saturation effect.
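Schematically, and in our notation rather than necessarily the paper's, a coefficient-based two-stage estimator of this kind can be written as:

```latex
% Stage 1: distributions x_1,\dots,x_n with responses y_1,\dots,y_n.
% Stage 2: samples \{x_{i,j}\}_{j=1}^{m}\sim x_i give empirical mean embeddings
%   \hat\mu_{x_i} = \tfrac1m \sum_{j=1}^{m} k(\cdot, x_{i,j})
% for a first-level kernel k; a second-level kernel K acts on the embeddings
% and may be indefinite. Coefficient-based regularization penalizes \alpha:
\hat\alpha
 = \operatorname*{arg\,min}_{\alpha\in\mathbb{R}^n}\;
   \frac{1}{n}\sum_{i=1}^{n}\Bigl(\sum_{j=1}^{n}\alpha_j\,\widehat K_{ij}-y_i\Bigr)^{\!2}
   + \lambda\,\|\alpha\|_2^2,
 \qquad \widehat K_{ij}=K\bigl(\hat\mu_{x_i},\hat\mu_{x_j}\bigr),
% with the learned regressor
f_{\hat\alpha}(\mu) = \sum_{j=1}^{n}\hat\alpha_j\,K\bigl(\mu,\hat\mu_{x_j}\bigr).
```

Because the penalty acts on the coefficient vector $\alpha$ itself rather than on an RKHS norm, the Gram matrix $\widehat K$ is never required to be symmetric or positive semi-definite, which is what permits indefinite kernels.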